"We are not Future-ready": Understanding AI Privacy Risks and Existing Mitigation Strategies from the Perspective of AI Developers in Europe
Klymenko, Alexandra, Meisenbacher, Stephen, Kelley, Patrick Gage, Peddinti, Sai Teja, Thomas, Kurt, Matthes, Florian
The proliferation of AI has sparked privacy concerns related to training data, model interfaces, downstream applications, and more. We interviewed 25 AI developers based in Europe to understand which privacy threats they believe pose the greatest risk to users, developers, and businesses and what protective strategies, if any, would help to mitigate them. We find that there is little consensus among AI developers on the relative ranking of privacy risks. These differences stem from salient reasoning patterns that often relate to human rather than purely technical factors. Furthermore, while AI developers are aware of proposed mitigation strategies for addressing these risks, they reported minimal real-world adoption. Our findings highlight both gaps and opportunities for empowering AI developers to better address privacy risks in AI.
The Download: AI privacy risks, and cleaning up shipping
One of the biggest stories in tech this year has been the rise of large language models (LLMs). These are AI models that produce text a human might have written, sometimes so convincingly that they have tricked people into thinking they are sentient. These models' power comes from troves of publicly available human-created text hoovered up from the internet. If you have posted anything even remotely personal in English on the internet, chances are your data is part of some of the world's most popular LLMs. My colleague Melissa Heikkilä, our AI reporter, recently started to wonder what data these models might have on her, and how it could be misused. A bruising experience a decade ago left her paranoid about sharing personal details online, so she put OpenAI's GPT-3 to the test to see what it "knows" about her.